Past efforts to apply the known tools of artificial intelligence were in many cases unsuccessful. This is because methods of artificial intelligence are based on the extraction of human knowledge in a subjective and creative domain: model building. Moreover, in this way it is not possible to solve the significant problems of modelling complex systems, such as inadequate a priori information, a large number of unmeasurable variables, noisy and extremely short data samples, and ill-defined objects with fuzzy characteristics. Here, knowledge extraction from data, i.e. deriving a model from experimental measurements by inductive methods, has advantages for rather complex objects about which only little a priori knowledge exists.
One development direction that takes up these practical demands is the self-organization of mathematical models, which is realizable by means of statistical learning networks such as GMDH algorithms.
f(x_i, x_j) = a_0 + a_1 x_i + a_2 x_j + a_3 x_i x_j + a_4 x_i^2 + a_5 x_j^2,
using various selection criteria such as the PESS criterion. In contrast to classical algorithms, this one is able to synthesize linear or nonlinear models of optimal complexity depending on the object structure, with a meaningful reduction of model complexity related to the existing noise level of the data. This results in more flexible modelling in each layer, because a partial model may contain none (y = a_0), one, or both input variables of every possible combination, depending on their actual contribution. The aim is to avoid, for short and very noisy data samples, the inclusion of redundant variables in modelling, which, once part of the model, could not be excluded afterwards. At the end, simpler models can therefore be expected.
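One layer step of this scheme can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate term subsets of the quadratic partial polynomial f(x_i, x_j) are fitted by least squares and ranked by a leave-one-out prediction-error score (a common reading of a PESS/PRESS-type criterion; the exact criterion used in the paper may differ), so that a partial model may end up with no, one, or both inputs. All data and helper names are invented for the example.

```python
import itertools
import numpy as np

def press(X, y):
    """Leave-one-out prediction error sum of squares for a linear LS fit."""
    H = X @ np.linalg.pinv(X)            # hat matrix of the least-squares fit
    resid = y - H @ y
    loo = resid / (1.0 - np.diag(H))     # leave-one-out residuals via hat diagonal
    return float(np.sum(loo ** 2))

def best_partial_model(xi, xj, y):
    """Select the subset of quadratic terms with minimal leave-one-out error."""
    terms = {"xi": xi, "xj": xj, "xi*xj": xi * xj,
             "xi^2": xi ** 2, "xj^2": xj ** 2}
    names = list(terms)
    best = None
    for r in range(len(names) + 1):      # from y = a0 alone up to all five terms
        for subset in itertools.combinations(names, r):
            cols = [np.ones_like(y)] + [terms[n] for n in subset]
            X = np.column_stack(cols)
            score = press(X, y)
            if best is None or score < best[0]:
                coef, *_ = np.linalg.lstsq(X, y, rcond=None)
                best = (score, subset, coef)
    return best

# Synthetic, noisy data: the true structure uses xi and xi*xj only.
rng = np.random.default_rng(0)
xi = rng.normal(size=40)
xj = rng.normal(size=40)
y = 1.0 + 2.0 * xi - 0.5 * xi * xj + 0.1 * rng.normal(size=40)
score, subset, coef = best_partial_model(xi, xj, y)
print(subset, round(score, 3))
```

On such data the criterion tends to keep only the truly contributing terms, which is exactly the redundancy-avoidance behaviour described above.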
A. Model of the dependence of the decision from the variables
Linear models y_M = ∑_i a_i x_i and nonlinear static models of the decision variable were generated from the 19 characteristics x_i. The decision variable was set to +1 ("positive") or -1 ("negative") according to the decision. All obtained models extracted the variables x_5, x_8, x_10, x_15 as significant, e.g.
y_M = -3.4528 + 0.1174 x_5 + 0.1701 x_15 - 0.551 x_8 + 1.311 x_10.
These four variables could be interpreted as the main decision variables.
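Using the reported linear model for classification can be illustrated as follows; a natural reading (assumed here, not stated explicitly in the paper) is that the sign of y_M gives the class. The feature values in the call are invented for the example, not taken from the data set.

```python
def decision(x5, x8, x10, x15):
    """Classify by the sign of the reported linear decision model y_M."""
    y_m = -3.4528 + 0.1174 * x5 + 0.1701 * x15 - 0.551 * x8 + 1.311 * x10
    return 1 if y_m >= 0 else -1   # +1 "positive", -1 "negative"

# Hypothetical characteristic values for one case:
print(decision(x5=10.0, x8=1.0, x10=3.0, x15=5.0))   # → 1
```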
B. Modelling of independent systems of equations
Another, more elaborate way is the generation of linear or nonlinear systems of equations separately for all positive and all negative decisions. In the case of linear models this is
x+=A+x+ ; x-=A-x- ; A={aij} , with aii=0.
Such systems grasp the spectrum of decisions better because they have a greater breadth of variation, and they can be interpreted as well. The corresponding model values x_i+ / x_i- are then calculated for the checking-set variables x_ic. Membership in class + or - was decided on the basis of the deviations Δ_i+ = x_ic - x_i+ and Δ_i- = x_ic - x_i-, respectively. The results in table I have been obtained in the following cases:
TABLE I. Classifications obtained from systems of equations
a. s+ = ∑_i |Δ_i+| ; s- = ∑_i |Δ_i-|.
b. s+ = ∑_{i∈N} |Δ_i+| ; s- = ∑_{i∈N} |Δ_i-|,
in which N is the set of indices of those variables having influence in model A.
c. s+ = ∑_{i∈M+} Δ_i+ x_ic ; s- = ∑_{i∈M-} Δ_i- x_ic, in which M+ and M- are the sets of indices of those input variables for which the best-fitting models were obtained (for positive and negative decisions, respectively).
d. A further way of decision making is to calculate the deviations Δ_i+ and Δ_i- for the variables x_ic and to classify each variable on the basis of the minimum deviation. The final decision is made as the sum of all single classifications.
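The four decision rules a)–d) can be sketched in one place. The deviation vectors, checking values, and index sets below are invented placeholders; in particular, comparing the variant-c sums by absolute value is an assumption, since the paper does not spell out how s+ and s- are compared there.

```python
import numpy as np

d_pos = np.array([0.2, -0.1, 0.4, 0.05])   # Δi+ for one checking case (invented)
d_neg = np.array([0.6, 0.5, -0.3, 0.7])    # Δi- (invented)
xc    = np.array([1.0, 2.0, 0.5, 1.5])     # checking-set values xic (invented)
N  = [0, 2]        # indices influential in model A (variant b, invented)
Mp = [0, 1]        # indices of best-fitting positive models (variant c, invented)
Mn = [2, 3]        # indices of best-fitting negative models (variant c, invented)

# a) total absolute deviation over all variables; smaller sum wins
cls_a = 1 if np.abs(d_pos).sum() < np.abs(d_neg).sum() else -1

# b) the same, restricted to the index set N
cls_b = 1 if np.abs(d_pos[N]).sum() < np.abs(d_neg[N]).sum() else -1

# c) deviations weighted by the checking values over M+ / M-
s_pos_c = (d_pos[Mp] * xc[Mp]).sum()
s_neg_c = (d_neg[Mn] * xc[Mn]).sum()
cls_c = 1 if abs(s_pos_c) < abs(s_neg_c) else -1

# d) per-variable vote by minimum deviation, then the sum of all votes
votes = np.where(np.abs(d_pos) < np.abs(d_neg), 1, -1)
cls_d = 1 if votes.sum() > 0 else -1

print(cls_a, cls_b, cls_c, cls_d)
```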
C. Synthesis
A synthesis of different classifications makes it possible to describe the wide spectrum of possible decisions better without losing the explanation component. Table II shows a synthesis on the basis of majority decisions.
TABLE II. Synthesis of different classifications
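The majority-decision synthesis itself reduces to a simple vote over the single classifiers; the individual classifications in the example call are invented, and ties are resolved to -1 here as an arbitrary assumption.

```python
from collections import Counter

def synthesize(votes):
    """Majority decision over individual classifications (+1 / -1)."""
    tally = Counter(votes)
    return 1 if tally[1] > tally[-1] else -1   # ties fall to -1 (assumption)

# Three of four single classifiers say +1, so the synthesis says +1:
print(synthesize([1, -1, 1, 1]))   # → 1
```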
4. CONCLUSIONS